Teaching with AI: From Passive Users to Critical Partners

An Innovative Assessment for a GenAI-Driven World

Author
Affiliation

Dr. Michael Borck

Faculty of Business & Law, School of Management & Marketing

Published

November 27, 2025

Welcome

“Good afternoon, everyone. My name is Dr. Michael Borck, and I’m a lecturer in Business Information Systems.

Today, I want to share an innovative assessment I’ve been developing that tries to answer a question many of us are grappling with: How do we authentically assess our students in a world that is being fundamentally reshaped by Generative AI?”

The Context: An Industry in Motion

“My work is at the intersection of machine learning and business. What I’m seeing, and what our industry partners are telling us, is that GenAI is driving incredible change.

The core premise I build from is this: these tools aren’t replacing our graduates. They are amplifying their skills. A good analyst with AI support can outperform a great analyst without it.

This presents a clear challenge for us as educators. If this is the new reality of the professional world, we have an obligation to prepare our students for it. We need to move beyond simply ‘banning’ it and instead teach them how to leverage it effectively.”

  • My work focuses on the intersection of machine learning and business.
  • Our industry is evolving at an incredible pace, driven by GenAI.
  • Core Premise: These tools are not replacing professional skills; they are amplifying them.
  • The Question: If our industry is using these tools, how can we not teach our students to use them effectively, critically, and responsibly?

The Challenge: Assessing with AI

“This leads us to the central challenge. The old model of trying to ‘catch’ students using AI feels like a losing battle, and I’m not sure it’s the right one to fight.

The reality is, our students will use these tools, just as professionals in the field are now required to.

So, the goal of my assessment design was to shift the focus. I don’t want to just assess what they know; I want to assess how they think and work with AI.

This requires a move towards ‘Lane 2’ or open assessments, where we design tasks that explicitly build skills like effective prompting, critical evaluation of AI output, and the ability to refine and take full responsibility for the final product.”

  • The Old Model: “Catching” students using AI. (Is this productive?)
  • The New Reality: Students will use AI. Professionals must use AI.
  • The Goal: Shift from assessing what students know to how they think and work with these new tools.
  • The Need: An assessment that teaches and tests the new critical skills:
    • Effective prompting (intelligence gathering).
    • Critical evaluation (separating fact from hallucination).
    • Refinement and integration (taking responsibility as the author).

The Assessment: The ‘CloudCore’ Audit

“This led me to design a new assessment for my postgraduate information security unit.

I call it the ‘CloudCore Audit.’ It’s a Lane 2, open assessment where students are immersed in a real-world scenario. Their task is to conduct a full security audit of a simulated AI company called CloudCore.

What makes this innovative, and the focus of my talk, is its two-pronged approach to AI integration.

First, students interact with AI-powered characters who act as the client. Second, they are encouraged to work with an LLM of their choice as their personal assistant.”

  • Unit: Postgraduate Information Security.
  • Task: Conduct a comprehensive information security audit of ‘CloudCore’, a simulated AI-based company.
  • Two-Pronged AI Integration:
    1. Students interact with AI (as the client).
    2. Students work with AI (as an assistant).

Innovation 1: AI as the “Client”


“Let’s look at that first prong. When a consultant starts a real audit, they don’t get a neat package of all the information. They have to interview people.

To simulate this, students don’t just get static documents. They interact with AI chatbots who play the roles of key company employees.

For example, the CFO chatbot is programmed to think about budget and risk. ‘Raj,’ the IT Manager, is focused on operational fires and his team’s bandwidth.

This immediately teaches a critical skill: how to gather intelligence. Students learn that the quality of their questions—their prompts—directly determines the quality of the answers. A vague question to the CFO gets a vague, ‘business-speak’ answer. A specific question about risk tolerance for data breaches gets a much more useful, nuanced response.”

  • Students interact with AI chatbots role-playing as company employees.
  • Example Roles:
    • CFO: Focuses on budget, risk appetite, and compliance costs.
    • ‘Raj’ (IT Manager): Concerned with operational issues, technical debt, and team capacity.
  • The Skill: Teaches effective intelligence gathering.
  • The Feedback Loop: Better prompts yield more insightful, specific responses. Poor prompts get vague, unhelpful answers.
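For readers curious how such role-playing clients might be wired up, here is a minimal, hypothetical sketch. The persona fields, the `compose_messages` helper, and the system-prompt wording are all illustrative assumptions, not the actual CloudCore implementation; only 'Raj' the IT Manager and the CFO's concerns come from the talk.

```python
from dataclasses import dataclass

# Hypothetical sketch: persona definitions like these could drive the
# role-playing chatbots. Field values beyond the roles named in the talk
# are invented placeholders.
@dataclass
class Persona:
    role: str
    priorities: str

PERSONAS = {
    "cfo": Persona(
        role="Chief Financial Officer",
        priorities="budget, risk appetite, and compliance costs",
    ),
    "raj": Persona(
        role="IT Manager",
        priorities="operational fires, technical debt, and team bandwidth",
    ),
}

def compose_messages(persona_key: str, question: str) -> list[dict]:
    """Build a chat-completion message list that keeps the bot in character."""
    p = PERSONAS[persona_key]
    system = (
        f"You role-play the {p.role} of CloudCore, a simulated AI company. "
        f"Stay in character. Your chief concerns are {p.priorities}. "
        "Answer vague questions with vague business-speak; reward specific, "
        "well-scoped questions with concrete, nuanced detail. Never reveal "
        "information outside your role's plausible knowledge."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

The key design point is in the system prompt: by instructing the model to answer vague questions vaguely, the feedback loop described above (better prompts yield better answers) emerges from the persona itself rather than from manual grading.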

Innovation 2: AI as the “Intern”

“The second innovation is how we frame their use of tools like ChatGPT. I explicitly encourage them to use an LLM, but I frame it very specifically:

‘Treat the AI as your junior intern.’

This framing is powerful because it’s relatable. An intern can do a lot of heavy lifting. But you would never just copy-paste an intern’s draft straight into a final report for a client.

Why? Because the intern, like the AI, doesn’t have the full context. They haven’t sat through all the lectures or read all the material. They will guess to fill in the gaps, and as we all know, an LLM can be confidently wrong.

This reframes the student’s role immediately.”

  • Students are encouraged to use an LLM of their choice (e.g., ChatGPT, Claude).
  • The Framing is Crucial: “Treat the AI as a junior intern.”
  • What does this mean?
    • An intern is powerful and can do a lot of work fast.
    • An intern does not understand the full context of the assignment.
    • An intern will guess, and sometimes guess poorly or “confidently make things up” (hallucinate).
  • The Skill: Teaches critical evaluation and responsibility.

The Student as “Author”

“The ‘intern’ framing makes one thing crystal clear: the student is the author, and they are ultimately responsible for 100% of the final product.

The work they submit must reflect their own understanding. This shifts the entire conversation. It’s no longer about ‘did you use AI?’ It’s about ‘how well did you manage your AI intern?’

This model requires them to use their knowledge from the course to critically evaluate the AI’s output, to tell when it’s helpful and when it’s hallucinating. They must review, edit, and stand by every single word.”

  • The ‘intern’ framing establishes a clear hierarchy of responsibility.
  • The Rule: “You are the author. You are 100% responsible for the final product. Every word you submit must reflect your understanding.”
  • This moves the assessment from “Did you write this?” to “Can you defend this?”
  • It explicitly tests their ability to:
    1. Use their course knowledge to evaluate the AI’s output.
    2. Identify and correct errors.
    3. Refine and add the necessary critical analysis.

Pedagogy: “Thinking with AI”

“Ultimately, my goal is to teach students not just what to think, but how to think with AI.

And this assessment model makes the quality of that thinking visible.

My best students don’t use ‘one-shot’ prompts. They don’t just ask the AI to ‘write an audit.’ Instead, they have a conversation. They explore ideas, they refine concepts, they push back on the AI.

Many also do what a real professional would: they use multiple LLMs to cross-reference the findings. They’ll ask their ‘interns’ to debate each other. This is a massive step up from being a passive user.”

  • The goal is to teach students how to think with AI.
  • Low-Level Use (The ‘C’ Student):
    • Uses ‘one-shot’ prompts.
    • Gets basic, generic results.
    • Submits a report that is superficial and easily identifiable.
  • High-Level Use (The ‘A’ Student):
    • Has a conversation with the AI.
    • Explores ideas, refines concepts, asks “what if.”
    • Uses multiple LLMs to cross-reference findings.
    • This mimics professional industry practice.

Design & Authenticity

“My main design priority is authenticity. It has to be as close to industry practice as possible.

In the real world, you don’t have 24/7 access to a client’s team. Access is scheduled, and frankly, it’s often unpredictable.

So, I’m already evolving this model. Instead of just having the chatbots available anytime, students will need to schedule specific appointments.

And to make it even more realistic, an AI ‘employee’ might occasionally ‘cancel’ a meeting, forcing the student to adapt and reschedule. This isn’t about being difficult; it’s about teaching the logistical adaptability that is a very real part of any client engagement.”

  • Priority: The assessment must mirror industry practice.
  • In the real world, client access is not unlimited or 24/7. It’s scheduled and unpredictable.
  • The Next Evolution:
    1. Scheduled Access: Students must schedule specific appointments with the AI employees (e.g., “The CFO is only available on Tuesday afternoon”).
    2. Simulating Unpredictability: An AI ‘employee’ might occasionally cancel a meeting, forcing the student to reschedule.
  • The Skill: Teaches adaptability, planning, and managing logistical challenges—all key professional competencies.
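As a rough illustration of the planned mechanic, the scheduled-access-with-cancellations idea could be simulated with a few lines. The availability windows, the 15% cancellation rate, and the function name are invented assumptions for the sketch, not the described design.

```python
import random

# Hypothetical sketch of the planned scheduled-access mechanic.
# Availability windows and the cancellation rate are placeholders.
AVAILABILITY = {
    "cfo": {"Tuesday"},                    # e.g. "only available Tuesday afternoon"
    "it_manager": {"Monday", "Thursday"},
}
CANCEL_RATE = 0.15  # chance an AI 'employee' cancels, forcing a reschedule

def request_meeting(employee: str, day: str, rng: random.Random) -> str:
    """Return the outcome of a meeting request: booked, unavailable, or cancelled."""
    if day not in AVAILABILITY.get(employee, set()):
        return "unavailable"
    if rng.random() < CANCEL_RATE:
        return "cancelled"  # the student must adapt and rebook
    return "booked"
```

Even this toy version captures the pedagogical point: the student cannot brute-force the interviews at 2 a.m. the night before the deadline, and must plan around an unreliable calendar, just as in a real client engagement.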

Academic Integrity as Transparency

“This brings us to the big question: academic integrity.

My approach is to focus on transparency. I ask students to submit the transcripts of their conversations with their chosen LLM.

Crucially, this isn’t about ‘catching’ them. It’s a teaching tool. It allows me to see their thought process.

A student who uses a basic, one-shot prompt will get a basic, generic result from the AI, and that is reflected in their audit and their grade.

But the transcripts also show me the students who are really thinking. I can see them pushing back on the AI, refining its output, and applying course concepts. This assessment model makes their process visible, and that process is what I’m trying to teach.”

  • How do we manage integrity in this model? By focusing on transparency.
  • I ask students to submit the transcripts of their conversations with their “intern.”
  • This is a teaching tool, not a “gotcha” tool.
  • It allows me to see their thought process.
  • I can see the student who used a basic prompt and got a basic result. Their grade reflects that superficial engagement.
  • I can also see the student who wrestled with the AI, refined its output, and demonstrated critical thought. Their grade reflects that deep engagement.

Key Takeaways

“So, to conclude, what are the key takeaways from this assessment design?

First, we should embrace GenAI as the amplifier it is.

Second, framing is critical. The ‘AI as Intern’ model works because it empowers students but holds them accountable.

Third, by designing assessments that interact with AI and asking for transparency, we can start to assess the process of thinking, not just the final product.

Ultimately, our goal must be to prepare our students for the modern workplace. That means helping them move from being passive users of these tools to becoming active, critical partners with them.”

  1. Embrace the “Amplifier”: GenAI is a tool that amplifies skills. Our assessments should reflect this.
  2. Frame it Right: The “AI as Intern” model empowers students while enforcing their role as the responsible author.
  3. Assess the Process: By making the process visible (via chatbots and transcripts), we can assess the how (critical thinking) and not just the what (the final product).
  4. The Goal: Move students from passive users to active, critical partners with AI.

Thank You

Dr. Michael Borck
michael.borck@curtin.edu.au

“Thank you for your time. I’d be happy to answer any questions you have about the design, the implementation, or the student feedback so far.”